
    High-Sn-content GeSn Alloy towards Room-temperature Mid Infrared Laser

    Si photonics is a rapidly expanding technology that integrates photonic circuits onto a Si substrate. The integration of Si electronics and photonics has been a successful technology for a wide range of applications. The group-IV alloy GeSn has drawn great attention as a complementary metal–oxide–semiconductor compatible optoelectronic material for Si photonics. Devices based on GeSn alloys can be monolithically integrated into well-established, high-yield Si integrated circuits, which favors chip-scale Si photonics with smaller size, lower cost, and higher reliability. Relaxed GeSn with high material quality and high Sn composition is highly desirable for covering mid-infrared wavelengths. A systematic study of the GeSn strain-relaxation mechanism and its effect on Sn incorporation during epitaxy via chemical vapor deposition was conducted. It was discovered that Sn incorporation into Ge lattice sites is limited by high compressive strain rather than by the historically accepted chemical reaction dynamics, a finding also confirmed by Gibbs free energy calculations. Following this growth mechanism, a record Sn content of 22.3% was achieved, and even higher Sn content could be obtained by continuing growth with the same recipe. GeSn lasers with higher Sn content are highly desired to reach longer mid-infrared wavelengths. This work demonstrated optically pumped edge-emitting GeSn lasers under two different pumping lasers, with 1064 and 1950 nm wavelengths. The device structure featured a compositionally graded Sn profile with a maximum Sn content of 22.3%. Under the 1950 nm pumping laser, the GeSn laser achieved record near-room-temperature lasing (270 K), and the lasing wavelength was extended to 3442 nm, the longest GeSn lasing wavelength reported to date. GeSn/GeSn/GeSn single and double quantum wells were also investigated to further improve laser performance. The unintentional Ge interlayer between the barrier and well regions of the QW structure was removed by introducing a GeSn buffer layer with variable Sn content. As a result, the QW structure was demonstrated to be a true type-I, direct-bandgap structure, which is advantageous for optoelectronic devices.
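    The compressive-strain limit on Sn incorporation described above can be illustrated with a back-of-the-envelope estimate. The sketch below uses Vegard's law (linear interpolation between the Ge and alpha-Sn lattice constants) to estimate the in-plane strain of a GeSn layer forced to the Ge lattice constant; this is a standard textbook approximation, not the analysis performed in the work itself.

```python
# Rough estimate of the compressive strain of a pseudomorphic GeSn layer
# on Ge, using Vegard's law. Illustrative only; the actual study measured
# strain relaxation during CVD epitaxy.

A_GE = 5.658   # Ge lattice constant, angstroms
A_SN = 6.489   # alpha-Sn (diamond cubic) lattice constant, angstroms

def gesn_lattice_constant(x_sn: float) -> float:
    """Vegard's-law (linear) interpolation of the relaxed GeSn lattice constant."""
    return (1.0 - x_sn) * A_GE + x_sn * A_SN

def in_plane_strain_on_ge(x_sn: float) -> float:
    """Biaxial strain of GeSn constrained to the Ge in-plane lattice constant.
    Negative values indicate compressive strain."""
    a_relaxed = gesn_lattice_constant(x_sn)
    return (A_GE - a_relaxed) / a_relaxed

# At the record 22.3% Sn content the mismatch is roughly -3.2% (compressive),
# illustrating why strain relaxation matters for further Sn incorporation.
print(f"{in_plane_strain_on_ge(0.223):.4f}")
```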

    Effect of carbon nanotube doping on critical current density of MgB2 superconductor

    The effect of doping MgB2 with carbon nanotubes on the transition temperature, lattice parameters, critical current density and flux pinning was studied for MgB2-xCx with x = 0, 0.05, 0.1, 0.2 and 0.3. Carbon substitution for B was found to enhance Jc in magnetic fields but to depress Tc; the depression of Tc increases with increasing doping level, sintering temperature and sintering duration. By controlling the extent of carbon nanotube substitution and addition, an optimal improvement in critical current density and flux pinning in magnetic fields can be achieved while keeping the reduction in Tc minimal. Under these conditions, Jc was enhanced by two orders of magnitude at 5 K and 8 T and at 10 K and 7 T, and exceeded 10,000 A/cm2 at 20 K and 4 T and at 5 K and 8.5 T

    SpreadCluster: Recovering Versioned Spreadsheets through Similarity-Based Clustering

    Version information plays an important role in spreadsheet understanding, maintenance and quality improvement. However, end users rarely use version control tools to document spreadsheet version information. As a result, version information is missing, and different versions of a spreadsheet coexist as individual, similar spreadsheets. Existing approaches try to recover spreadsheet version information by clustering these similar spreadsheets based on filenames or related email conversations. However, the applicability and accuracy of these approaches are limited because the necessary information (e.g., filenames and email conversations) is often missing. We inspected the versioned spreadsheets in VEnron, a corpus extracted from the Enron Corporation, in which the different versions of a spreadsheet are clustered into an evolution group. We observed that the versioned spreadsheets in each evolution group exhibit certain common features (e.g., similar table headers and worksheet names). Based on this observation, we propose an automatic clustering algorithm, SpreadCluster. SpreadCluster learns feature criteria from the versioned spreadsheets in VEnron and then automatically clusters spreadsheets with similar features into the same evolution group. We applied SpreadCluster to all spreadsheets in the Enron corpus. The evaluation shows that SpreadCluster clusters spreadsheets with higher precision and recall than the filename-based approach used by VEnron. Based on the clustering result, we further created a new versioned spreadsheet corpus, VEnron2, which is much larger than VEnron. We also applied SpreadCluster to two other spreadsheet corpora, FUSE and EUSES. The results show that SpreadCluster can cluster the versioned spreadsheets in these corpora with high precision.
    Comment: 12 pages, MSR 201
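    The clustering idea described in the abstract can be sketched as follows. This is a hypothetical, minimal illustration: the Jaccard features on worksheet names and table headers, the equal feature weights, and the 0.6 threshold are all assumptions made here for clarity; the actual criteria in the paper are learned from the VEnron corpus.

```python
# Hypothetical sketch of similarity-based spreadsheet clustering in the
# spirit of SpreadCluster: spreadsheets whose worksheet names and table
# headers overlap strongly are grouped into one "evolution group".

def jaccard(a: set, b: set) -> float:
    """Jaccard overlap of two sets; two empty sets count as identical."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def similarity(s1: dict, s2: dict) -> float:
    # Equal-weight average of worksheet-name overlap and header overlap
    # (illustrative weights, not the paper's learned criteria).
    return 0.5 * jaccard(s1["sheets"], s2["sheets"]) + \
           0.5 * jaccard(s1["headers"], s2["headers"])

def cluster(spreadsheets: list, threshold: float = 0.6) -> list:
    """Greedy single-pass clustering: attach each spreadsheet to the first
    group whose representative is similar enough, else start a new group."""
    groups = []
    for s in spreadsheets:
        for g in groups:
            if similarity(g[0], s) >= threshold:
                g.append(s)
                break
        else:
            groups.append([s])
    return groups

sheets = [
    {"name": "budget_v1.xls", "sheets": {"Q1", "Q2"}, "headers": {"item", "cost"}},
    {"name": "budget_v2.xls", "sheets": {"Q1", "Q2"}, "headers": {"item", "cost", "notes"}},
    {"name": "roster.xls",    "sheets": {"staff"},    "headers": {"name", "role"}},
]
groups = cluster(sheets)
print(len(groups))  # the two budget versions group together; roster stands alone
```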

    Thresh: A Unified, Customizable and Deployable Platform for Fine-Grained Text Evaluation

    Fine-grained, span-level human evaluation has emerged as a reliable and robust method for evaluating text generation tasks such as summarization, simplification, machine translation and news generation, and the derived annotations have been useful for training automatic metrics and improving language models. However, existing annotation tools implemented for these evaluation frameworks lack the adaptability to be extended to different domains or languages, or to have their annotation settings modified according to user needs. The absence of a unified annotated data format also inhibits research in multi-task learning. In this paper, we introduce Thresh, a unified, customizable and deployable platform for fine-grained evaluation. By simply creating a YAML configuration file, users can build and test an annotation interface for any framework within minutes -- all in one web browser window. To facilitate collaboration and sharing, Thresh provides a community hub that hosts a collection of fine-grained frameworks and corresponding annotations made and collected by the community, covering a wide range of NLP tasks. For deployment, Thresh offers multiple options for annotation projects of any scale, from small manual inspections to large crowdsourcing efforts. Additionally, we introduce a Python library to streamline the entire process, from typology design and deployment to annotation processing. Thresh is publicly accessible at https://thresh.tools
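    The YAML-driven setup described above might look roughly like the fragment below. This is a hypothetical sketch: the key names and structure are illustrative assumptions, not the actual Thresh schema (which is documented at https://thresh.tools).

```yaml
# Hypothetical Thresh-style typology config; key names are illustrative
# assumptions, not the tool's actual schema.
template_name: summary-error-annotation
template_label: "Fine-grained summary evaluation"
edits:
  - name: hallucination
    label: "Hallucination"
    color: red        # highlight color in the interface
  - name: omission
    label: "Omission"
    color: blue
```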

    Improving Large-scale Paraphrase Acquisition and Generation

    This paper addresses the quality issues in existing Twitter-based paraphrase datasets and discusses the necessity of using two separate definitions of paraphrase for identification and generation tasks. We present a new Multi-Topic Paraphrase in Twitter (MultiPIT) corpus that consists of a total of 130k sentence pairs with crowdsourced (MultiPIT_crowd) and expert (MultiPIT_expert) annotations, using two different paraphrase definitions for paraphrase identification, in addition to a multi-reference test set (MultiPIT_NMR) and a large automatically constructed training set (MultiPIT_Auto) for paraphrase generation. With improved annotation quality and task-specific paraphrase definitions, the best pre-trained language model fine-tuned on our dataset achieves state-of-the-art performance of 84.2 F1 for automatic paraphrase identification. Furthermore, our empirical results demonstrate that paraphrase generation models trained on MultiPIT_Auto generate more diverse and higher-quality paraphrases than their counterparts fine-tuned on other corpora such as Quora, MSCOCO, and ParaNMT.
    Comment: The project webpage is at http://twitter-paraphrase.com/ Accepted at EMNLP 202
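    The 84.2 F1 figure reported above is the standard harmonic mean of precision and recall on the positive (paraphrase) class. The snippet below shows the formula with made-up counts chosen only to land near that score; the counts are not from the MultiPIT paper.

```python
# F1 for binary paraphrase identification: harmonic mean of precision
# and recall on the positive class. Counts below are illustrative only.

def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 842 true positives, 158 false positives, 158 false negatives
# gives precision = recall = 0.842, hence F1 = 0.842.
print(round(f1_score(842, 158, 158), 3))
```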